Onwards!
We have a roughly equal distribution of clients in the network at genesis.
In the plots below, validators activated at genesis are aligned on the y-axis. A point on the plot is coloured in green when the validator has managed to get their attestation included for the epoch given on the x-axis. Otherwise, the point is coloured in red. Note that we do not check for the correctness of the attestation, merely its presence in some block of the beacon chain.
The plots allow us to check when a particular client is experiencing issues, at which point some share of validators of that client will be unable to publish their attestations.
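The data behind such a plot can be sketched as a boolean matrix over validators and epochs. The sketch below assumes a hypothetical set of `(validator_index, epoch)` pairs recording included attestations; the names and toy data are illustrative, not the actual dataset schema.

```python
# Hypothetical records of included attestations: validator 1 missed epoch 1.
included = {(0, 0), (0, 1), (1, 0)}

def inclusion_matrix(included, n_validators, n_epochs):
    # matrix[v][e] is True (green) when validator v had an attestation
    # for epoch e included in some block, and False (red) otherwise.
    return [[(v, e) in included for e in range(n_epochs)]
            for v in range(n_validators)]

matrix = inclusion_matrix(included, 2, 2)
```

Each row of the matrix then maps directly to one horizontal strip of green and red points in the plot.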
A block can include at most 128 aggregate attestations. How many aggregate attestations did each client include on average?
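The per-client average can be computed from per-block counts. A minimal sketch, assuming hypothetical `(proposing_client, aggregates_included)` records:

```python
from collections import defaultdict

# Hypothetical per-block records: (proposing_client, aggregates_included).
blocks = [("teku", 100), ("teku", 120), ("prysm", 128)]

def avg_aggregates_per_client(blocks):
    # Sum and count blocks per client, then average. No block may
    # include more than the protocol cap of 128 aggregate attestations.
    totals = defaultdict(lambda: [0, 0])
    for client, n_aggs in blocks:
        assert n_aggs <= 128
        totals[client][0] += n_aggs
        totals[client][1] += 1
    return {client: total / count
            for client, (total, count) in totals.items()}

averages = avg_aggregates_per_client(blocks)
```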
Smaller blocks lead to a healthier network, as long as they do not leave attestations out. We check how each client manages redundancy in the next sections.
Myopic redundant aggregates are aggregates that were already published, with the same attesting indices, in a previous block.
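Counting these amounts to checking each aggregate's attesting indices against those seen in earlier blocks. A sketch, representing each aggregate by a hypothetical frozenset of attesting indices:

```python
def count_myopic_redundant(chain):
    # chain: list of blocks, each block a list of frozensets of attesting
    # indices (one per aggregate). An aggregate is myopic redundant when
    # the exact same attesting indices appeared in an earlier block.
    seen, redundant = set(), 0
    for block in chain:
        for indices in block:
            if indices in seen:
                redundant += 1
        # Only mark indices as seen after the whole block is processed,
        # so duplicates within the same block are not counted here.
        seen.update(block)
    return redundant

chain = [[frozenset({1, 2})], [frozenset({1, 2}), frozenset({3})]]
redundant_count = count_myopic_redundant(chain)
```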
Subset aggregates are aggregates included in a block which are fully covered by another aggregate included in the same block. Namely, when aggregate 1 has attesting indices \(I\) and aggregate 2 has attesting indices \(J\), aggregate 1 is a subset aggregate when \(I \subset J\).
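With the same frozenset representation as above, the strict-subset check \(I \subset J\) maps directly onto Python's `<` operator on sets. A sketch with hypothetical data:

```python
def count_subset_aggregates(block):
    # block: list of frozensets of attesting indices, one per aggregate.
    # An aggregate is a subset aggregate when its indices form a strict
    # subset (<) of another aggregate's indices in the same block.
    return sum(any(a < b for b in block) for a in block)

# Toy block: {1, 2} is fully covered by {1, 2, 3}; {4} is not covered.
block = [frozenset({1, 2}), frozenset({1, 2, 3}), frozenset({4})]
subset_count = count_subset_aggregates(block)
```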
Lighthouse and Nimbus both score a perfect 0.
We first look at the reward rates per client since genesis.
Clients are hosted on AWS nodes scattered across four regions in roughly equal proportions. We look at the reward rates per region.
Performing an omnibus test across the four groups, we do not find a significant difference between any of them.
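The text does not name the omnibus test used; one standard choice is a one-way ANOVA, whose F statistic compares between-group to within-group variance. A minimal pure-Python sketch on hypothetical reward-rate samples:

```python
def anova_f(groups):
    # One-way ANOVA F statistic: ratio of between-group variance to
    # within-group variance; a large F suggests the group means differ.
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy reward-rate samples for two regions (hypothetical numbers).
f_stat = anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])
```

The statistic is then compared against the F distribution with \(k-1\) and \(n-k\) degrees of freedom to obtain a p-value.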
Around epoch 1020, nodes from regions 1 and 2 were scaled down from t3.xlarge units (4 vCPUs, 16GB memory, with unlimited CPU burst) to m5.large units (2 vCPUs, 8GB memory, no burst). We observe a significant loss of performance despite continuous uptime.
Reward rates per client are affected in roughly equal proportions.
We look at four metrics across each region:
To obtain a time series, we divide the period between epoch 900 and epoch 1250 into chunks of 10 epochs. For each validator, we record how many included attestations appear in the dataset (ranging between 0 and 10 for each chunk), the number of correct targets and correct heads, and its average inclusion delay. We average over all validators in the EF-controlled set, measuring metrics either per client or per region.
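The chunking step above can be sketched as follows, again assuming hypothetical `(validator_index, epoch)` pairs for included attestations:

```python
def chunk_of(epoch, start=900, size=10):
    # Map an epoch to its chunk index within [start, end), in steps of size.
    return (epoch - start) // size

def included_per_chunk(records, start=900, end=1250, size=10):
    # records: hypothetical (validator_index, epoch) pairs of included
    # attestations. Returns, per (validator, chunk), the number of
    # included attestations, ranging between 0 and size for each chunk.
    counts = {}
    for validator, epoch in records:
        if start <= epoch < end:
            key = (validator, chunk_of(epoch, start, size))
            counts[key] = counts.get(key, 0) + 1
    return counts

# Toy data: validator 0 included at epochs 900 and 905 (chunk 0), 912 (chunk 1).
records = [(0, 900), (0, 905), (0, 912)]
chunk_counts = included_per_chunk(records)
```

The same grouping applies to the other metrics (correct targets, correct heads, inclusion delay), replacing the count with the appropriate per-chunk aggregate.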
We start by looking at the metrics per region.
And now per client.